Accelerating interval matrix multiplication by mixed precision arithmetic
Authors
Abstract
This paper is concerned with real interval arithmetic. We focus on interval matrix multiplication. Well-known algorithms for this purpose require the evaluation of several point matrix products to compute one interval matrix product. In order to save computing time, we propose a method that modifies such known algorithms by partially using low-precision floating-point arithmetic. The modified algorithms work without significant loss of tightness of the computed interval matrix product but are about 30% faster than their corresponding original versions. The negligible loss of accuracy is rigorously estimated.
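The standard midpoint-radius approach (e.g., Rump's algorithm) evaluates the midpoint product in working precision and bounds the radius by additional point matrix products. Below is a minimal sketch of the mixed-precision idea, assuming a midpoint-radius representation and using NumPy; the function name, the single-precision choice for the radius products, and the inflation constants are illustrative, not the paper's rigorously derived bounds.

```python
import numpy as np

def interval_matmul_mixed(mA, rA, mB, rB):
    """Sketch of a midpoint-radius interval matrix product <mA, rA> * <mB, rB>.

    Illustrative only: the midpoint product stays in double precision, while
    the radius products (which only need to be over-estimated, not tight)
    are evaluated in single precision and then inflated to cover the
    lower-precision rounding errors.
    """
    # Midpoint product in working (double) precision.
    mC = mA @ mB

    # Radius products in single precision; tightness is not critical here.
    absA = np.abs(mA).astype(np.float32)
    absB = np.abs(mB).astype(np.float32)
    rA32 = rA.astype(np.float32)
    rB32 = rB.astype(np.float32)
    rC = (absA @ rB32 + rA32 @ (absB + rB32)).astype(np.float64)

    # Crude inflation standing in for a rigorous rounding-error bound
    # (the paper derives such a bound rigorously; these constants are
    # purely illustrative).
    n = mA.shape[1]
    eps32 = np.finfo(np.float32).eps
    eps64 = np.finfo(np.float64).eps
    rC += (n + 2) * eps32 * rC + (n + 2) * eps64 * np.abs(mC)

    return mC, rC
```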
Similar resources
Tuning Technique for Multiple Precision Dense Matrix Multiplication using Prediction of Computational Time
Although reliable long precision floating-point arithmetic libraries such as QD and MPFR/GMP are necessary to solve ill-conditioned problems in numerical simulation, long precision BLAS-level computation such as matrix multiplication has not been fully optimized because tuning costs are very high compared to IEEE float and double precision arithmetic. In this study, we develop a technique to sh...
On fast matrix-vector multiplication with a Hankel matrix in multiprecision arithmetics
We present two fast algorithms for matrix-vector multiplication y = Ax, where A is a Hankel matrix. The current asymptotically fastest method is based on the Fast Fourier Transform (FFT); however, in multiprecision arithmetic with very high accuracy the FFT method is actually slower than schoolbook multiplication for matrix sizes up to n = 8000. One method presented is based on a decomposition of m...
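For context, the usual way to exploit the FFT here is to embed the Hankel product in a linear convolution. A minimal NumPy sketch of this textbook approach follows (double precision only, so not the multiprecision setting the cited paper targets; the function name is illustrative).

```python
import numpy as np

def hankel_matvec_fft(h, x):
    """Compute y = H x for the Hankel matrix H[i, j] = h[i + j] via FFT.

    h has length 2n - 1 and x has length n; the product is a slice of the
    linear convolution of h with the reversed x.
    """
    n = x.size
    assert h.size == 2 * n - 1
    m = 3 * n - 2                      # length of the full linear convolution
    conv = np.fft.irfft(np.fft.rfft(h, m) * np.fft.rfft(x[::-1], m), m)
    return conv[n - 1:2 * n - 1]

# Quick check against an explicit Hankel matrix:
# from scipy.linalg import hankel
# H = hankel(h[:n], h[n - 1:]); np.allclose(H @ x, hankel_matvec_fft(h, x))
```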
Accelerated Multiple Precision Matrix Multiplication using Strassen's Algorithm and Winograd's Variant
The Strassen algorithm and Winograd's variant accelerate matrix multiplication by using fewer arithmetic operations than standard matrix multiplication. Although many papers have been published to accelerate single- as well as double-precision matrix multiplication by using these algorithms, no research to date has been undertaken to accelerate multiple precision matrix multiplication. In this pa...
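For reference, this is the seven-product recursion both methods build on; a minimal double-precision NumPy sketch for square matrices whose size is a power of two (a multiple-precision version would substitute a long-precision matrix type, and the leaf size is arbitrary).

```python
import numpy as np

def strassen(A, B, leaf=64):
    """Strassen's recursion: seven sub-products instead of eight."""
    n = A.shape[0]
    if n <= leaf:
        return A @ B                      # fall back to the standard product
    k = n // 2
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
    M1 = strassen(A11 + A22, B11 + B22, leaf)
    M2 = strassen(A21 + A22, B11, leaf)
    M3 = strassen(A11, B12 - B22, leaf)
    M4 = strassen(A22, B21 - B11, leaf)
    M5 = strassen(A11 + A12, B22, leaf)
    M6 = strassen(A21 - A11, B11 + B12, leaf)
    M7 = strassen(A12 - A22, B21 + B22, leaf)
    C = np.empty_like(A)
    C[:k, :k] = M1 + M4 - M5 + M7
    C[:k, k:] = M3 + M5
    C[k:, :k] = M2 + M4
    C[k:, k:] = M1 - M2 + M3 + M6
    return C
```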
Autotuning GEMMs for Fermi
In recent years, the use of graphics chips has been recognized as a viable way of accelerating scientific and engineering applications, even more so since the introduction of the Fermi architecture by NVIDIA, with features essential to numerical computing, such as fast double precision arithmetic and memory protected with error correction codes. Being the crucial component of numerical software...
Acceleration of a Preconditioning Method for Ill-Conditioned Dense Linear Systems by Use of a BLAS-based Method
We are interested in accurate numerical solutions of ill-conditioned linear systems using floating-point arithmetic. Recently, we proposed a preconditioning method to reduce the condition numbers of coefficient matrices. The method utilizes an LU factorization obtained in working precision arithmetic and requires matrix multiplication in quadruple precision arithmetic. In this note, we aim to a...
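The basic shape of such an LU-based preconditioning step might look as follows; this is only a sketch, with np.longdouble standing in for the quadruple-precision product the note actually requires, and with a hypothetical function name.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def precondition(A, b):
    """Sketch of LU-based preconditioning for an ill-conditioned system A x = b.

    X ~ inv(A) is obtained from a working-precision LU factorization; the
    preconditioned matrix X A and right-hand side X b are then formed in
    higher precision (np.longdouble here as a stand-in for quadruple
    precision).
    """
    n = A.shape[0]
    lu, piv = lu_factor(A)                       # working-precision LU
    X = lu_solve((lu, piv), np.eye(n))           # approximate inverse of A
    XA = X.astype(np.longdouble) @ A.astype(np.longdouble)
    Xb = X.astype(np.longdouble) @ b.astype(np.longdouble)
    # XA should be far better conditioned than A, so the preconditioned
    # system XA y = Xb can be solved accurately in working precision.
    return XA, Xb
```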
Publication date: 2015